
    Development, Testing, and Validation of a Model-Based Tool to Predict Operator Responses in Unexpected Workload Transitions

    One human factors challenge is predicting operator performance in novel situations. Approaches such as drawing on relevant previous experience and developing computational models to predict operator performance in complex situations offer potential methods to address this challenge. Key concerns with modeling operator performance are that models need to be realistic, and that they need to be tested empirically and validated. In addition, many existing human performance modeling tools are complex and require that an analyst gain significant experience before developing models suitable for meaningful data collection. This paper describes an effort to address these challenges by developing an easy-to-use model-based tool, using models that were developed from a review of the existing human performance literature and targeted experimental studies, and performing an empirical validation of key model predictions.

    Predicting Pilot Error in NextGen: Pilot Performance Modeling and Validation Efforts

    We review 25 articles presenting 5 general classes of computational models to predict pilot error. This more targeted review is placed within the context of a broader review of computational models of pilot cognition and performance, including such aspects as models of situation awareness and pilot-automation interaction. Particular emphasis is placed on the degree of validation of such models against empirical pilot data, and on the relevance of the modeling and validation efforts to NextGen technology and procedures.

    Visual Attention Allocation Between Robotic Arm and Environmental Process Control: Validating the STOM Task Switching Model

    Fifty-six participants time-shared a spacecraft environmental control system task with a realistic space robotic arm control task, the latter in either a manual or a highly automated version. The former could suffer minor failures, whose diagnosis and repair were supported by a decision aid. At the end of the experiment this decision aid unexpectedly failed. We measured visual attention allocation and switching between the two tasks in each of the eight conditions formed by manual versus automated arm, expected versus unexpected failure, and monitoring versus failure management. We also used our multi-attribute task switching model, based on the task attributes of priority, interest, difficulty, and salience as self-rated by participants, to predict allocation. An unweighted model based on the attributes of difficulty, interest, and salience accounted for 96 percent of the task allocation variance across the eight conditions. Task difficulty served as an attractor, with more difficult tasks increasing the tendency to stay on task.
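    As a rough illustration of the unweighted attribute-sum prediction described above, the Python sketch below sums self-rated attribute scores for each task and normalizes them into predicted attention shares. The task names and ratings are hypothetical, not data from the study.

```python
# Rough illustration of an unweighted multi-attribute allocation prediction.
# Task names and attribute ratings below are hypothetical placeholders, not
# data from the study summarized above.

def predicted_allocation(tasks):
    """Sum each task's (equally weighted) attribute ratings and normalize
    across tasks to obtain a predicted share of visual attention."""
    scores = {name: sum(attrs.values()) for name, attrs in tasks.items()}
    total = sum(scores.values())
    return {name: score / total for name, score in scores.items()}

# Hypothetical self-ratings on a 1-5 scale.
tasks = {
    "robotic_arm":     {"difficulty": 4, "interest": 5, "salience": 3},
    "process_control": {"difficulty": 2, "interest": 3, "salience": 4},
}

for task, share in predicted_allocation(tasks).items():
    print(f"{task}: predicted {share:.0%} of attention")
```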

    Predicting the Unpredictable: Estimating Human Performance Parameters for Off-Nominal Events

    A parameter meta-analysis was conducted to characterize human responses to off-nominal events. The probability of detecting an off-nominal event was influenced by characteristics of the off-nominal event scenario (phase of flight, expectancy, and event location) and by the presence of advanced cockpit technologies (head-up displays, highway-in-the-sky displays, datalink, and graphical route displays). The results revealed that the presence of these advanced technologies hindered event detection, reflecting cognitive tunneling and pilot complacency effects.
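    The abstract above reports meta-analytic influences on detection probability without giving parameter values. Purely as an illustration of how an analyst might apply such parameters, the Python sketch below combines an assumed baseline detection probability with assumed multiplicative scenario and technology modifiers; every number and factor name is hypothetical.

```python
# Illustration only: combining a baseline detection probability with
# multiplicative scenario and technology modifiers. All values below are
# hypothetical placeholders, not the meta-analytic estimates.

def detection_probability(baseline, modifiers):
    """Apply multiplicative modifiers to a baseline detection probability
    and clamp the result to the valid [0, 1] range."""
    p = baseline
    for multiplier in modifiers.values():
        p *= multiplier
    return max(0.0, min(1.0, p))

# Hypothetical case: an unexpected event during approach with a HUD equipped.
p = detection_probability(
    baseline=0.70,
    modifiers={
        "unexpected_event": 0.6,   # low expectancy reduces detection
        "head_up_display": 0.8,    # possible cognitive tunneling effect
    },
)
print(f"Estimated detection probability: {p:.2f}")
```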

    Circadian Effects on Simple Components of Complex Task Performance

    The goal of this study was to advance understanding and prediction of the impact of circadian rhythm on aspects of complex task performance during unexpected automation failures and subsequent fault management. Participants trained on two tasks: a process control simulation featuring automated support, and a multi-tasking platform. Participants then completed one task in a very early morning (circadian night) session and the other during a late afternoon (circadian day) session. Small effects of time of day were seen on simple components of task performance, but impacts on more demanding components, such as those that follow an automation failure, were muted relative to previous studies in which circadian rhythm was compounded with sleep deprivation and fatigue. Participants at their circadian low engaged in compensatory strategies rather than passively monitoring the automation. The findings and implications are discussed in the context of a model that includes the effects of sleep and fatigue factors.

    MIDAS-FAST: Design and Validation of a Model-Based Tool to Predict Operator Performance with Robotic Arm Automation

    The Coalition for Aerospace and Science (CAS) is hosting an exhibition on Capitol Hill on June 14, 2017, to highlight the contributions of CAS members to NASA's portfolio of activities. This exhibition represents an opportunity for an HFES member's groundbreaking work to be displayed and to build support within Congress for NASA's human research program, including those areas that are of specific interest to the HFE community. The intent of this poster presentation is to demonstrate the positive outcome that comes from funding HFE-related research on a project like the one exemplified by MIDAS-FAST.

    Modeling and Evaluating Pilot Performance in NextGen: Review of and Recommendations Regarding Pilot Modeling Efforts, Architectures, and Validation Studies

    NextGen operations are associated with a variety of changes to the national airspace system (NAS), including changes to the allocation of roles and responsibilities among operators and automation, the use of new technologies and automation, additional information presented on the flight deck, and the overall concept of operations (ConOps). In the transition to NextGen airspace, aviation and air operations designers need to consider the implications of design or system changes for human performance and the potential for error. To ensure the continued safety of the NAS, researchers will need to evaluate design concepts and potential NextGen scenarios well before implementation. One approach to such evaluations is human performance modeling. Human performance models (HPMs) provide effective tools for predicting and evaluating operator performance in systems. HPMs offer significant advantages over empirical, human-in-the-loop testing in that (1) they allow detailed analyses of systems that have not yet been built, (2) they offer great flexibility for extensive data collection, and (3) they do not require experimental participants and thus can offer cost and time savings. HPMs differ in their ability to predict performance and safety with NextGen procedures, equipment, and ConOps. Models also vary in how they approach human performance (e.g., some focus on cognitive processing, others on discrete tasks performed by a human, and others on perceptual processes) and in their associated validation efforts. The objectives of this research effort were to support the Federal Aviation Administration (FAA) in identifying HPMs that are appropriate for predicting pilot performance in NextGen operations, to provide guidance on how to evaluate the quality of different models, and to identify gaps in pilot performance modeling research that could guide future research opportunities. This research effort is intended to help the FAA evaluate pilot modeling efforts and select the appropriate tools for future modeling efforts to predict pilot performance in NextGen operations.

    Flight Deck Models of Workload and Multi-Tasking: An Overview of Validation

    We review 24 computational modeling efforts of pilot multi-task performance and workload, describing the manner in which they model three different aspects of pilot performance: the complexity of effort, the complexity of time management, and the complexity of multiple resource interference. We then discuss the degree of validation of these models and the validity of the contexts in which they are validated.
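    As one concrete way to picture the multiple resource interference aspect mentioned above, the Python sketch below computes a simple demand-plus-conflict interference score for two concurrent tasks. The resource channels, demand values, and conflict weights are assumed for illustration and do not correspond to any particular model in the review.

```python
# Illustrative demand-plus-conflict interference score for two concurrent
# tasks, in the spirit of multiple-resource workload models. The demand
# values and conflict weights are assumed for the example and are not taken
# from any specific model reviewed above.

RESOURCES = ["visual", "auditory", "cognitive", "motor"]

def interference(demand_a, demand_b, conflict):
    """Total interference = summed demands of both tasks plus a conflict
    term for each resource channel that both tasks load simultaneously."""
    demand_term = sum(demand_a.values()) + sum(demand_b.values())
    conflict_term = sum(
        conflict[r] * min(demand_a[r], demand_b[r]) for r in RESOURCES
    )
    return demand_term + conflict_term

# Hypothetical demands on a 0-3 scale for a flying task and a datalink task.
flying = {"visual": 3, "auditory": 0, "cognitive": 2, "motor": 2}
datalink = {"visual": 2, "auditory": 0, "cognitive": 2, "motor": 1}
conflict = {"visual": 1.0, "auditory": 0.5, "cognitive": 0.8, "motor": 0.6}

print(f"Predicted interference score: {interference(flying, datalink, conflict):.1f}")
```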

    Review of Pilot Performance and Pilot-Automation Interaction Models in Support of NextGen

    Computational models of aircraft pilot performance will gain importance over the next decades as major evolutions in the national airspace continue to emerge with the NextGen program. Evaluating new technology, or procedures such as self-separation, requires time- and resource-consuming pilot-in-the-loop (PITL) simulations. Models can augment PITL findings, and they can help to constrain the scope of PITL simulations. If they are validated, such computational models may actually answer some design questions in place of PITL simulations. This paper summarizes a review of modeling efforts that address pilot performance and elaborates on pilot-automation interaction models.

    Automation for Human-Robotic Interaction: Modeling and Predicting Operator Performance

    Human-robotic interaction presents numerous challenges to designers and operators. One way to address these challenges is through task automation. However, how to apply automation appropriately, reducing workload while keeping the operator informed and in control without causing skill degradation, is not generally understood. In this paper, we describe a human performance modeling and simulation approach to evaluating the effects of automation on operator and system performance. In this research, we identify relevant factors that affect operator performance and combine them into operator-robotic system interaction models. The result of this project will be a partially validated tool to help system designers evaluate potential automation strategies for their expected effects on operator and system performance.